
Implement MPI_Gatherv Wrapper for String Array as DataType and add bindings for MPI_LOGICAL and MPI_CHARACTER #126


Merged
4 commits merged into lfortran:main on May 26, 2025

Conversation

adit4443ya
Collaborator

Towards #121
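
For reference, this is the kind of call the wrapper enables. Below is a minimal sketch against the standard MPI Fortran API, which these wrappers mirror; the program is illustrative, not a test from this PR:

program gatherv_char_demo
    use mpi
    implicit none
    integer :: rank, nprocs, ierr, i
    character(len=5) :: sendword
    character(len=5), allocatable :: recvwords(:)
    integer, allocatable :: recvcounts(:), displs(:)

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

    ! Each rank contributes a 5-character word: 'proc0', 'proc1', ...
    sendword = 'proc' // achar(iachar('0') + mod(rank, 10))

    allocate(recvwords(nprocs), recvcounts(nprocs), displs(nprocs))
    recvcounts = 5                           ! characters contributed per rank
    displs = [(5*(i - 1), i = 1, nprocs)]    ! contiguous placement on root

    call MPI_Gatherv(sendword, 5, MPI_CHARACTER, &
                     recvwords, recvcounts, displs, MPI_CHARACTER, &
                     0, MPI_COMM_WORLD, ierr)

    if (rank == 0) print *, recvwords
    call MPI_Finalize(ierr)
end program gatherv_char_demo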

@adit4443ya
Collaborator Author

LFortran is failing on the write statement here, and even with its workaround it fails; the reason is c_loc. I will post an MRE/fix in the morning.

@adit4443ya
Collaborator Author

adit4443ya commented May 22, 2025

I locally get this after applying the following diff:

 aditya-trivedi   tests    gatherv_char ≢  ?1 ~1    git diff
diff --git a/tests/gatherv_2.f90 b/tests/gatherv_2.f90
index 8092207..3125f87 100644
--- a/tests/gatherv_2.f90
+++ b/tests/gatherv_2.f90
@@ -28,7 +28,7 @@ program gatherv_pfunit_1
     numEntries = rank + 1
     allocate(sendBuffer(numEntries))
     do i = 1, numEntries
-        write(sendBuffer(i), '(A,I0)') 'proc', rank  ! Dummy words: "proc0", "proc1", etc.
+        sendBuffer(i) = 'proc' // trim(adjustl(int2str(rank)))  ! Dummy words: "proc0", "proc1", etc.
     end do
 
     ! Allocate receive buffers on root
 

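Note that int2str is not a Fortran intrinsic, so a helper along these lines is presumably in scope; this is a hypothetical sketch, and the point of the diff is that it writes into a fixed-length scalar rather than into an element of the allocatable string array:

function int2str(n) result(s)
    ! Hypothetical helper assumed by the diff above; not an intrinsic.
    integer, intent(in) :: n
    character(len=12) :: s    ! wide enough for any default integer
    write(s, '(I0)') n        ! internal write to a scalar, not an array element
end function int2str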
LFortran Output

#################################
Using FC=lfortran --cpp compiler
Using CC=gcc compiler
################################

Received argument(s). Will only compile/run: gatherv_2.f90
Compiling gatherv_2...
Running gatherv_2 with 1 MPI ranks...
MPI_Gatherv pFUnit test passed on root
Test gatherv_2 with 1 MPI ranks PASSED!
Running gatherv_2 with 2 MPI ranks...
malloc(): corrupted top size
[Observer:30822] *** Process received signal ***
[Observer:30822] Signal: Aborted (6)
[Observer:30822] Signal code:  (-6)
[Observer:30822] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x45330)[0x7d7579445330]
[Observer:30822] [ 1] /lib/x86_64-linux-gnu/libc.so.6(pthread_kill+0x11c)[0x7d757949eb2c]
[Observer:30822] [ 2] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x1e)[0x7d757944527e]
[Observer:30822] [ 3] /lib/x86_64-linux-gnu/libc.so.6(abort+0xdf)[0x7d75794288ff]
[Observer:30822] [ 4] /lib/x86_64-linux-gnu/libc.so.6(+0x297b6)[0x7d75794297b6]
[Observer:30822] [ 5] /lib/x86_64-linux-gnu/libc.so.6(+0xa8ff5)[0x7d75794a8ff5]
[Observer:30822] [ 6] /lib/x86_64-linux-gnu/libc.so.6(+0xac2fc)[0x7d75794ac2fc]
[Observer:30822] [ 7] /lib/x86_64-linux-gnu/libc.so.6(malloc+0xa4)[0x7d75794ad6f4]
[Observer:30822] [ 8] /home/aditya-trivedi/thats_me/main/lfortran/src/bin/../runtime/liblfortran_runtime.so.0(_lfortran_str_item+0x7f)[0x7d7579c09ebb]
[Observer:30822] [ 9] ./gatherv_2(+0x291c)[0x5f0ce8c3591c]
[Observer:30822] [10] ./gatherv_2(+0x2af0)[0x5f0ce8c35af0]
[Observer:30822] [11] ./gatherv_2(+0x3fd9)[0x5f0ce8c36fd9]
[Observer:30822] [12] /lib/x86_64-linux-gnu/libc.so.6(+0x2a1ca)[0x7d757942a1ca]
[Observer:30822] [13] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x8b)[0x7d757942a28b]
[Observer:30822] [14] ./gatherv_2(+0x2209)[0x5f0ce8c35209]
[Observer:30822] *** End of error message ***
--------------------------------------------------------------------------
prterun noticed that process rank 0 with PID 30822 on node Observer exited on
signal 6 (Aborted).
--------------------------------------------------------------------------
Test gatherv_2 with 2 MPI ranks FAILED!
 aditya-trivedi   tests    gatherv_char ≢  ?1 ~1    echo $?
1

While GFortran passes the test

Output
#################################
Using FC=gfortran -cpp compiler
Using CC=gcc compiler
################################

Received argument(s). Will only compile/run: gatherv_2.f90
Compiling gatherv_2...
Running gatherv_2 with 1 MPI ranks...
 MPI_Gatherv pFUnit test passed on root
Test gatherv_2 with 1 MPI ranks PASSED!
Running gatherv_2 with 2 MPI ranks...
 MPI_Gatherv pFUnit test passed on root
Test gatherv_2 with 2 MPI ranks PASSED!
Running gatherv_2 with 4 MPI ranks...
 MPI_Gatherv pFUnit test passed on root
Test gatherv_2 with 4 MPI ranks PASSED!

... Running standalone tests took 5 seconds ...

@certik
Collaborator

certik commented May 22, 2025

Try to figure out some workaround that works in LFortran. If not easy, then we need to fix LFortran first.

@adit4443ya
Collaborator Author

adit4443ya commented May 23, 2025

I think we might need to fix LFortran first. I'm not able to reduce an MRE, but per my findings it only gets killed when more than one process is running, and only in the case of strings. The error states malloc(): corrupted top size, which relates to how LFortran allocates arrays of strings on the stack at runtime.

To mimic it I have to use MPI to run two processes at a time, and once I do, the exact point of failure is more or less lost. Either way, I may need help here.
CC: @gxyd
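
Distilled from the diff above, the pattern under suspicion is the internal write into an element of an allocatable array of strings. This is not a reduced MRE (as noted, the crash only reproduces with more than one MPI rank); it is just the shape of the code in question:

program str_array_write
    implicit none
    character(len=20), allocatable :: buf(:)
    integer :: i
    allocate(buf(3))
    do i = 1, 3
        write(buf(i), '(A,I0)') 'proc', i   ! internal write into an array element
    end do
    print *, buf
end program str_array_write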

@gxyd
Collaborator

gxyd commented May 26, 2025

Let me look into this today.

@gxyd
Collaborator

gxyd commented May 26, 2025

I’ve raised an issue on the fortran_mpi repository to extract a minimal reproducible example (MRE): #128. We still need to report this to LFortran — I haven’t had the time today, but I’ll do that later. I suspect the issue is related to Fortran intrinsics like write, adjustl, or trim.

In the meantime, I’ve added a new test case for MPI_Gatherv and pushed it as the latest commit in this PR. If the tests pass, I’ll go ahead and merge it. Fortunately, this test doesn’t use the intrinsics that caused the previous failure, so it should be more stable.
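
For what it's worth, per-rank strings can be built without write, trim, or adjustl, for example by selecting from fixed-length literals. A hypothetical sketch of that shape (the actual test is the one pushed in the latest commit):

program fixed_words
    implicit none
    character(len=5), parameter :: words(0:3) = ['proc0', 'proc1', 'proc2', 'proc3']
    integer :: rank
    rank = 2                        ! stand-in for the MPI rank
    print *, words(mod(rank, 4))    ! yields 'proc2' with no write/trim/adjustl
end program fixed_words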

@gxyd
Collaborator

gxyd commented May 26, 2025

The tests pass; I'm merging this.

@gxyd gxyd merged commit f865434 into lfortran:main May 26, 2025
8 checks passed
@adit4443ya
Collaborator Author

Thanks @gxyd !!

@adit4443ya added the enhancement (New feature or request) label on May 29, 2025